
Conversation

@sbryngelson
Member

@sbryngelson sbryngelson commented Feb 5, 2026

User description

Summary

Follow-up improvements to the CI and developer experience work from #1122:

  • Auto-install shell completions: On first ./mfc.sh run, completions are installed to ~/.local/share/mfc/completions/ and the source line is added to .bashrc/.zshrc
  • Auto-update completions: When generated completion files change, installed copies are automatically updated
  • Pre-commit hook improvements: Cap parallelism at 12 jobs, show job count in output
  • Tab completion fixes: Prevent unwanted directory completion fallback
  • Documentation: Clarify -v/-vv/-vvv (verbosity), --debug (compiler flags), and --debug-log (toolchain debugging)
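The first-run completion install described above can be sketched roughly as follows; the function and variable names (`install_completions`, `COMPLETION_DIR`, `RC_LINE`) are illustrative, not the exact ones in mfc.sh:

```shell
# Hypothetical sketch of the first-run completion install (assumed names).
install_completions() {
    COMPLETION_DIR="$HOME/.local/share/mfc/completions"
    mkdir -p "$COMPLETION_DIR"
    # Copy the generated completion file if present (path is illustrative)
    cp toolchain/completions/mfc.bash "$COMPLETION_DIR/" 2>/dev/null || true

    RC_FILE="$HOME/.bashrc"
    RC_LINE="source $COMPLETION_DIR/mfc.bash"
    # Append the source line only once: skip if the path already appears
    if [ -f "$RC_FILE" ] && ! grep -qF "$COMPLETION_DIR" "$RC_FILE"; then
        printf '\n# MFC shell completion\n%s\n' "$RC_LINE" >> "$RC_FILE"
    fi
}
```

The `grep -qF` guard is what keeps repeated runs from appending duplicate source lines to the rc file.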

Changes

  • mfc.sh: Auto-install completions with shell rc setup
  • toolchain/main.py: Auto-update installed completions when regenerated
  • toolchain/mfc/cli/completion_gen.py: Fix bash/zsh completion options
  • .githooks/pre-commit: Cap at 12 jobs, show count
  • toolchain/mfc/cli/commands.py: Clarify flag documentation

Test plan

  • Run ./mfc.sh on fresh clone - completions should auto-install
  • Delete ~/.local/share/mfc/completions/ and run again - should reinstall
  • Tab completion should not suggest directories like build/
  • Pre-commit hook should show -j N where N ≤ 12

🤖 Generated with Claude Code

PR Type

Enhancement, Tests

Description

  • Add CI lint-gate job and local precheck command for early validation

  • Auto-install git pre-commit hook and shell completions on first run

  • Gate benchmarks on test suite completion to prevent wasted HPC resources

  • Improve documentation clarity for debug, verbosity, and logging flags

  • Add concurrency groups to CI workflows to cancel superseded runs

Diagram Walkthrough

flowchart LR
  A["mfc.sh startup"] --> B["Auto-install pre-commit hook"]
  A --> C["Auto-install shell completions"]
  D["./mfc.sh precheck"] --> E["Format check"]
  D --> F["Spell check"]
  D --> G["Lint checks"]
  H["CI: test.yml"] --> I["Lint Gate job"]
  I --> J["Github test matrix"]
  J --> K["wait-for-tests job"]
  K --> L["Benchmark jobs"]

File Walkthrough

Relevant files
Enhancement
7 files
main.py
Auto-update installed completions when regenerated             
+31/-3   
commands.py
Add precheck command and clarify flag documentation           
+26/-4   
mfc.sh
Auto-install pre-commit hook and shell completions             
+38/-0   
precheck.sh
New precheck script for local CI validation                           
+138/-0 
pre-commit
New git pre-commit hook running precheck                                 
+30/-0   
test.yml
Add lint-gate job and concurrency groups                                 
+45/-3   
bench.yml
Add wait-for-tests job and concurrency groups                       
+59/-3   
Bug fix
1 file
completion_gen.py
Fix bash/zsh completion options and prevent directory fallback
+10/-5   
Documentation
1 file
docs_gen.py
Clarify debug and verbosity flag documentation                     
+10/-3   
Configuration changes
2 files
cleanliness.yml
Add concurrency groups to cancel superseded runs                 
+4/-0     
coverage.yml
Add concurrency groups to cancel superseded runs                 
+4/-0     

CodeAnt-AI Description

Shell help overhaul, resilient CI jobs, and clearer build output & completions

What Changed

  • Help and quick-start now show a compact, colorized overview with truncated command descriptions and single-line quick-start for faster, easier reading.
  • Build output in interactive streaming merges stderr into stdout so live build progress no longer deadlocks; error panels display full, trimmed outputs for clearer failure details.
  • Automatic installation and update of shell completions on first run; completions are placed under ~/.local/share/mfc/completions and sourced or added to shell rc files automatically.
  • CI and runner scripts: added retry loop for build/test steps (up to 3 attempts with clean+wait between tries), ignore SIGHUP to survive login-node drops, pass verbose flag to test/build commands, and switch job submission to safer monitor-based tracking (extracts and prints sbatch job id).
  • GitHub workflows: benchmark job now triggers after Test Suite completion, many workflows limit push triggers to master, and concurrency groups improved to cancel superseded runs.
  • Python/venv bootstrap: lowered minimum supported Python minor version to 3.10, use faster installer when available with controlled verbosity and clearer installer output and failure messages.
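The stderr-into-stdout merge mentioned above avoids the classic pipe deadlock, where a reader drains stdout while the unread stderr pipe fills its buffer and blocks the child. A minimal shell sketch (not MFC's actual code):

```shell
# Illustrative only: stream a command's stderr merged into stdout so a
# single reader never deadlocks on a full, unread stderr pipe.
run_streamed() {
    # 2>&1 routes stderr into the same stream the reader consumes
    "$@" 2>&1 | while IFS= read -r line; do
        printf 'build> %s\n' "$line"
    done
}
```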

Impact

✅ Clearer build failure output
✅ Fewer CI wasted runs (retries + test-gated benchmarks)
✅ Immediate tab completion setup on first run

💡 Usage Guide

Checking Your Pull Request

Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.

Talking to CodeAnt AI

Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:

@codeant-ai ask: Your question here

This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.

Example

@codeant-ai ask: Can you suggest a safer alternative to storing this secret?

Preserve Org Learnings with CodeAnt

You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:

@codeant-ai: Your feedback here

This helps CodeAnt AI learn and adapt to your team's coding style and standards.

Example

@codeant-ai: Do not flag unused imports.

Retrigger review

Ask CodeAnt AI to review the PR again, by typing:

@codeant-ai: review

Check Your Repository Health

To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.

sbryngelson and others added 17 commits February 4, 2026 16:17
- Add lint-gate job to test.yml that runs fast checks (formatting,
  spelling, linting) before expensive test matrix and HPC jobs start
- Add concurrency groups to test.yml, coverage.yml, cleanliness.yml,
  and bench.yml to cancel superseded runs on new pushes
- Add ./mfc.sh precheck command for local CI validation before pushing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add cross-platform hash function (macOS uses md5, Linux uses md5sum)
- Validate -j/--jobs argument (require value, must be numeric)
- Improve error messages with actionable guidance
- Clarify that formatting has been auto-applied when check fails

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
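The cross-platform hash helper this commit describes might look like the following sketch; `compute_hash` is the name used later in precheck.sh, but the exact implementation may differ:

```shell
# Hedged sketch of a cross-platform md5-of-stdin helper.
compute_hash() {
    if command -v md5sum >/dev/null 2>&1; then
        md5sum | awk '{print $1}'   # Linux: strip the trailing "  -"
    else
        md5                          # macOS: prints just the digest for stdin
    fi
}
```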
Add wait-for-tests job that polls GitHub API to ensure:
- Lint Gate passes first (fast fail)
- All Github test jobs complete successfully
- Only then do benchmark jobs start

This prevents wasting HPC resources on benchmarking code that
fails tests, while preserving the existing maintainer approval gate.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add .githooks/pre-commit that runs ./mfc.sh precheck before commits
- Auto-install hook on first ./mfc.sh invocation (symlinks to .git/hooks/)
- Hook only installs once; subsequent runs skip if already present
- Developers can bypass with: git commit --no-verify

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Auto-detect available CPUs for parallel formatting:
- Linux: nproc
- macOS: sysctl -n hw.ncpu
- Fallback: 4

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
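A direct transcription of the detection order above (assumed shape, not the verbatim hook code):

```shell
# Pick a parallel job count: nproc on Linux, sysctl on macOS, else 4.
detect_jobs() {
    if command -v nproc >/dev/null 2>&1; then
        nproc                  # Linux
    elif command -v sysctl >/dev/null 2>&1; then
        sysctl -n hw.ncpu      # macOS
    else
        echo 4                 # conservative fallback
    fi
}
```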
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Keep both: pre-commit hook auto-install and 'Starting...' log message

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Register precheck in commands.py so it appears in:
- Shell tab completion
- CLI documentation
- ./mfc.sh precheck --help

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When completion scripts are auto-regenerated, also update the installed
completions at ~/.local/share/mfc/completions/ if they exist.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When installed shell completions are auto-updated, print a message
with the appropriate source command for the user's detected shell.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, installed completions only updated when source files
changed and regeneration occurred. Now we also check if the installed
completions are older than the generated ones (e.g., after git pull
brings new pre-generated completions).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove -o bashdefault from bash complete command to prevent
  falling back to directory completion when no matches found
- Add explicit : (no-op) for zsh commands without arguments to
  prevent default file/directory completion

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
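An illustrative bash registration reflecting these choices; the function name `_mfc_complete` and word list are hypothetical. With no `-o bashdefault`, an empty `COMPREPLY` yields no suggestions instead of falling back to directory names:

```shell
# Sketch: complete subcommand names; -o bashdefault is deliberately absent.
_mfc_complete() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=( $(compgen -W "build test clean precheck spelling lint" -- "$cur") )
}
complete -o filenames -F _mfc_complete mfc.sh
```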
- Auto-install completions on first mfc.sh run (via main.py)
- Add -o filenames back to bash complete (needed for file completion)
- Keep -o bashdefault removed to prevent directory fallback
- Simplify code by having __update_installed_completions handle both
  install and update cases

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Move completion auto-install to mfc.sh so it runs for ALL commands
including help, precheck, etc. This ensures completions are always
set up on first run.

- Install completion files to ~/.local/share/mfc/completions/
- Add source line to .bashrc or fpath to .zshrc
- Tell user to restart shell or source the file
- main.py now only handles updates when generated files change

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- -v/-vv/-vvv: output verbosity levels
- --debug: build with debug compiler flags (for MFC Fortran code)
- --debug-log/-d: Python toolchain debug logging (not MFC code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Avoid hogging resources on machines with many cores.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously checked if ~/.local/share/mfc/completions/ existed.
Now checks if the actual completion file exists for the user's shell.
This handles the edge case of an empty completions directory.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings February 5, 2026 15:30
@codeant-ai
Contributor

codeant-ai bot commented Feb 5, 2026

CodeAnt AI is reviewing your PR.


Thanks for using CodeAnt! 🎉

We're free for open-source projects. If you're enjoying it, help us grow by sharing.

Share on X ·
Reddit ·
LinkedIn

@cursor

cursor bot commented Feb 5, 2026

You have run out of free Bugbot PR reviews for this billing cycle. This will reset on March 8.

To receive reviews on all of your PRs, visit the Cursor dashboard to activate Pro and start your 14-day free trial.

@coderabbitai
Contributor

coderabbitai bot commented Feb 5, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.
📝 Walkthrough


Centralizes shell-completion installation into toolchain/bootstrap/completions.sh, standardizes quoted argument forwarding across bootstrap scripts, adds verbose install output to the Python bootstrap, refactors help/error formatting, updates many GitHub Actions triggers (bench now responds to Test Suite workflow_run), and makes multiple SLURM/CI script robustness and minor fixes.

Changes

Cohort / File(s) Summary
Top-level bootstrap & entry
mfc.sh
Delegates completion install/update to toolchain/bootstrap/completions.sh; adds early upgrade-check for legacy bootstrap/build dirs; only passes "$@" to main.py when a command is present; consistently quotes forwarded args.
Completions installer
toolchain/bootstrap/completions.sh
New _mfc_setup_completions() installs/updates bash and zsh completions under ~/.local/share/mfc/completions, requires generated sources, appends RC sourcing once, and activates completions when sourced.
Python bootstrap
toolchain/bootstrap/python.sh
Adds verbose flags (-v, -vv, -vvv, --verbose); separates verbose vs non-verbose install/log handling and preserves copying toolchain/pyproject.toml to build/ on success.
Help & build output
toolchain/mfc/user_guide.py, toolchain/mfc/build.py
User help reformatted to a compact inline layout with truncated descriptions; build error display now prints full non-empty stdout/stderr (removed truncation).
CI workflows (bench + others)
.github/workflows/bench.yml, .github/workflows/test.yml, .github/workflows/coverage.yml, .github/workflows/*
bench.yml now triggers on workflow_run (Test Suite completion) and workflow_dispatch, replaces polling with explicit PR info step, wires clone ref to workflow_run.head_sha, and simplifies self-job conditions; many workflows change push trigger to branches: [master].
Frontier / SLURM submit & build scripts
.github/workflows/frontier/..., .github/workflows/frontier_amd/..., .github/workflows/phoenix/...
Added SIGHUP-trap resilience, capture/parse sbatch output, introduce output_file variable, change SBATCH account from CFD154 to ENG160, replace blocking -W with external monitor invocation, and add submission error handling.
Frontier build retry
.github/workflows/frontier/build.sh, .github/workflows/frontier_amd/build.sh
Add retry loop for bench/test runs with per-attempt status, delays, and exit codes when attempts exhausted.
Minor fixes
.github/workflows/frontier_amd/test.sh, other workflow ymls
Whitespace fix in GPU command args; unify workflow trigger formatting across many YAMLs.
Tooling manifest
toolchain/pyproject.toml
Pins pyrometheus dependency to a specific commit hash.

Sequence Diagram(s)

sequenceDiagram
    participant TestSuite as Test Suite (workflow_run)
    participant BenchWF as bench.yml workflow
    participant FileChanges as file-changes job
    participant GHAPI as GitHub API
    participant Matrix as Matrix jobs

    TestSuite->>BenchWF: emit workflow_run (conclusion: success)
    BenchWF->>FileChanges: start (uses workflow_run.head_sha)
    FileChanges->>GHAPI: Get PR info (pr_number, author, approvals)
    GHAPI-->>FileChanges: pr_number / pr_author / pr_approved
    FileChanges-->>BenchWF: outputs (checkall, pr_number, pr_approved)
    BenchWF->>Matrix: trigger matrix jobs (ref = head_sha or fallback)
    Matrix->>Matrix: clone repo and run build/test

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

Review effort 3/5, size:M

Suggested reviewers

  • wilfonba

Poem

🐇 I hopped through scripts and smoothed the tracks,
Completions gathered in neat little stacks,
CI waits for tests to finish their run,
Installers whisper verbose — job done,
A rabbit chuckles softly at tidy hacks.

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 60.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
  • Description check ❓ Inconclusive: The PR description is comprehensive (summary, changes, test plan, diagram) but does not follow the required template. Resolution: add the missing template sections: explicit 'Type of change' checkboxes, detailed test configuration, and verification that all checklist items have been addressed before merging.

✅ Passed checks (1 passed)

  • Title check ✅ Passed: The title 'Shell completion auto-install and pre-commit hook improvements' directly summarizes a major component of the changeset, though it does not fully capture all changes, such as CI/workflow updates and bootstrap improvements.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@codeant-ai codeant-ai bot added the size:L This PR changes 100-499 lines, ignoring generated files label Feb 5, 2026
@qodo-code-review
Contributor

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Possible Issue

__update_installed_completions references os (and prints via cons) without showing any local imports in the function. This works only if os/cons are guaranteed to be available as module globals; otherwise this will raise at runtime when regeneration runs. Consider making dependencies explicit inside the function or ensuring they are imported in the module scope.

def __update_installed_completions(toolchain: str):
    """Update installed shell completions if they're older than generated ones."""
    import shutil  # pylint: disable=import-outside-toplevel
    from pathlib import Path  # pylint: disable=import-outside-toplevel

    src_dir = Path(toolchain) / "completions"
    dst_dir = Path.home() / ".local" / "share" / "mfc" / "completions"

    # Only update if already installed (mfc.sh handles initial install)
    if not dst_dir.exists():
        return

    # Update if installed but older than generated
    updated = False
    for name in ["mfc.bash", "_mfc"]:
        if (src_dir / name).exists() and (dst_dir / name).exists():
            if os.path.getmtime(src_dir / name) > os.path.getmtime(dst_dir / name):
                shutil.copy2(src_dir / name, dst_dir / name)
                updated = True

    if updated:
        is_zsh = "zsh" in os.environ.get("SHELL", "")
        src_cmd = f"source {dst_dir}/_mfc" if is_zsh else f"source {dst_dir}/mfc.bash"
        cons.print(f"[dim]Tab completions updated. Run: {src_cmd}[/dim]")
Duplication

The “lint gate” checks are duplicated between this script and .github/workflows/test.yml (format/spelling/toolchain lint + three grep-based source lints). This increases the chance of drift (different patterns, excludes, or behavior). Consider factoring the grep-based source lint checks into a single shared script invoked by both CI and precheck, or having CI call ./mfc.sh precheck to keep a single source of truth.

log "Running$MAGENTA precheck$COLOR_RESET (same checks as CI lint-gate)..."
echo ""

# 1. Check formatting
log "[$CYAN 1/4$COLOR_RESET] Checking$MAGENTA formatting$COLOR_RESET..."
# Capture state before formatting
BEFORE_HASH=$(git diff -- '*.f90' '*.fpp' '*.py' 2>/dev/null | compute_hash)
if ! ./mfc.sh format -j "$JOBS" > /dev/null 2>&1; then
    error "Formatting check failed to run."
    FAILED=1
else
    # Check if formatting changed any Fortran/Python files
    AFTER_HASH=$(git diff -- '*.f90' '*.fpp' '*.py' 2>/dev/null | compute_hash)
    if [ "$BEFORE_HASH" != "$AFTER_HASH" ]; then
        error "Code was not formatted. Files have been auto-formatted; review and stage the changes."
        echo ""
        git diff --stat -- '*.f90' '*.fpp' '*.py' 2>/dev/null || true
        echo ""
        FAILED=1
    else
        ok "Formatting check passed."
    fi
fi

# 2. Spell check
log "[$CYAN 2/4$COLOR_RESET] Running$MAGENTA spell check$COLOR_RESET..."
if ./mfc.sh spelling > /dev/null 2>&1; then
    ok "Spell check passed."
else
    error "Spell check failed. Run$MAGENTA ./mfc.sh spelling$COLOR_RESET for details."
    FAILED=1
fi

# 3. Lint toolchain (Python)
log "[$CYAN 3/4$COLOR_RESET] Running$MAGENTA toolchain lint$COLOR_RESET..."
if ./mfc.sh lint > /dev/null 2>&1; then
    ok "Toolchain lint passed."
else
    error "Toolchain lint failed. Run$MAGENTA ./mfc.sh lint$COLOR_RESET for details."
    FAILED=1
fi

# 4. Source code lint checks
log "[$CYAN 4/4$COLOR_RESET] Running$MAGENTA source lint$COLOR_RESET checks..."
SOURCE_FAILED=0

# Check for raw OpenACC/OpenMP directives
if grep -qiR '!\$acc\|!\$omp' --exclude="parallel_macros.fpp" --exclude="acc_macros.fpp" --exclude="omp_macros.fpp" --exclude="shared_parallel_macros.fpp" --exclude="syscheck.fpp" ./src/* 2>/dev/null; then
    error "Found raw OpenACC/OpenMP directives. Use macros instead."
    SOURCE_FAILED=1
fi

# Check for double precision intrinsics
if grep -qiR 'double_precision\|dsqrt\|dexp\|dlog\|dble\|dabs\|double\ precision\|real(8)\|real(4)\|dprod\|dmin\|dmax\|dfloat\|dreal\|dcos\|dsin\|dtan\|dsign\|dtanh\|dsinh\|dcosh\|d0' --exclude-dir=syscheck --exclude="*nvtx*" --exclude="*precision_select*" ./src/* 2>/dev/null; then
    error "Found double precision intrinsics. Use generic intrinsics."
    SOURCE_FAILED=1
fi

# Check for junk code patterns
if grep -qiR -e '\.\.\.' -e '\-\-\-' -e '===' ./src/* 2>/dev/null; then
    error "Found junk code patterns (..., ---, ===) in source."
    SOURCE_FAILED=1
fi

if [ $SOURCE_FAILED -eq 0 ]; then
    ok "Source lint passed."
else
    FAILED=1
fi
CI Fragility

The new polling job uses gh api .../check-runs and assumes check-run names like Lint Gate and names starting with Github. If check names change (matrix naming, capitalization) the gate may hang until timeout or incorrectly block benchmarks. Also consider whether GITHUB_TOKEN has sufficient permissions in all trigger contexts (notably PRs from forks) for querying check-runs; otherwise this job may fail unexpectedly.

wait-for-tests:
  name: Wait for Test Suite
  runs-on: ubuntu-latest
  steps:
    - name: Wait for Test Suite to Pass
      env:
        GH_TOKEN: ${{ github.token }}
      run: |
        echo "Waiting for Test Suite workflow to complete..."
        SHA="${{ github.event.pull_request.head.sha || github.sha }}"

        # Poll every 60 seconds for up to 3 hours
        for i in $(seq 1 180); do
          # Get the Test Suite workflow runs for this commit
          STATUS=$(gh api repos/${{ github.repository }}/commits/$SHA/check-runs \
            --jq '.check_runs[] | select(.name == "Lint Gate") | .conclusion' | head -1)

          if [ "$STATUS" = "success" ]; then
            echo "Lint Gate passed. Checking test jobs..."

            # Check if any Github test jobs failed
            FAILED=$(gh api repos/${{ github.repository }}/commits/$SHA/check-runs \
              --jq '[.check_runs[] | select(.name | startswith("Github")) | select(.conclusion == "failure")] | length')

            if [ "$FAILED" != "0" ]; then
              echo "::error::Test Suite has failing jobs. Benchmarks will not run."
              exit 1
            fi

            # Check if Github tests are still running
            PENDING=$(gh api repos/${{ github.repository }}/commits/$SHA/check-runs \
              --jq '[.check_runs[] | select(.name | startswith("Github")) | select(.conclusion == null)] | length')

            if [ "$PENDING" = "0" ]; then
              echo "All Test Suite jobs completed successfully!"
              exit 0
            fi

            echo "Tests still running ($PENDING pending)..."
          elif [ "$STATUS" = "failure" ]; then
            echo "::error::Lint Gate failed. Benchmarks will not run."
            exit 1
          else
            echo "Lint Gate status: ${STATUS:-pending}..."
          fi

          sleep 60
        done

        echo "::error::Timeout waiting for Test Suite to complete."
        exit 1

@codeant-ai
Contributor

codeant-ai bot commented Feb 5, 2026

CodeAnt AI finished reviewing your PR.

Contributor

Copilot AI left a comment


Pull request overview

This PR implements follow-up improvements to the CI and developer experience workflow introduced in #1122, focusing on shell completion auto-installation, completion auto-updates, and pre-commit hook enhancements.

Changes:

  • Auto-install shell completions to ~/.local/share/mfc/completions/ on first ./mfc.sh run, with automatic rc file configuration for bash/zsh
  • Auto-update installed completions when generated files are newer than installed copies
  • Enhanced pre-commit hook with parallelism capped at 12 jobs and visible job count in output
  • Fixed bash/zsh tab completion to prevent unwanted directory fallback by removing -o bashdefault
  • Clarified documentation distinguishing --debug (compiler flags), -v/-vv/-vvv (verbosity), and --debug-log (toolchain debugging)

Reviewed changes

Copilot reviewed 1 out of 1 changed files in this pull request and generated 2 comments.

File Description
mfc.sh Auto-install shell completions and pre-commit hook on first run, with rc file modifications
toolchain/main.py Auto-update installed completions when regenerated files are newer
toolchain/mfc/cli/completion_gen.py Remove -o bashdefault from bash completions and add explicit no-op for empty zsh commands
toolchain/bootstrap/precheck.sh New precheck script for local CI validation before commits
toolchain/mfc/cli/commands.py Add PRECHECK_COMMAND definition and clarify flag documentation
toolchain/mfc/cli/docs_gen.py Update documentation for debug and verbosity flags
.githooks/pre-commit New pre-commit hook with capped parallelism (max 12 jobs)
.github/workflows/test.yml Add lint-gate job and concurrency groups
.github/workflows/coverage.yml Add concurrency configuration
.github/workflows/cleanliness.yml Add concurrency configuration
.github/workflows/bench.yml Add concurrency and wait-for-tests job with polling logic
Comments suppressed due to low confidence (2)

mfc.sh:42

  • The shell completion auto-install logic modifies user's shell rc files (.bashrc/.zshrc) without explicit user consent. This could be considered intrusive. Consider either:
  1. Prompting the user before modifying their shell rc files, or
  2. Printing a clear message explaining what was added and how to remove it if desired.

Additionally, the check on line 38 only verifies if COMPLETION_DIR exists in the rc file, which could produce false positives if the directory path appears in an unrelated context.

    if [ -f "$RC_FILE" ] && ! grep -q "$COMPLETION_DIR" "$RC_FILE" 2>/dev/null; then
        echo "" >> "$RC_FILE"
        echo "# MFC shell completion" >> "$RC_FILE"
        echo "$RC_LINE" >> "$RC_FILE"
    fi

mfc.sh:16

  • The symlink creation could fail silently if the user doesn't have write permissions to .git/hooks/ or if the target doesn't exist. Consider adding error handling to inform the user if the hook installation fails.
if [ -d "$(pwd)/.git" ] && [ ! -e "$(pwd)/.git/hooks/pre-commit" ] && [ -f "$(pwd)/.githooks/pre-commit" ]; then
    ln -sf "$(pwd)/.githooks/pre-commit" "$(pwd)/.git/hooks/pre-commit"
    log "Installed git pre-commit hook (runs$MAGENTA ./mfc.sh precheck$COLOR_RESET before commits)."

Check if installed completions are older than source files and
update them automatically. Shows message with source command
only on install or update, silent otherwise.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
sbryngelson and others added 3 commits February 8, 2026 13:04
Previously, workflows triggered on both push and pull_request events,
causing duplicate CI runs when pushing to PR branches. Now push events
only trigger CI on master, while PRs are tested via pull_request event.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The previous fix could incorrectly install zsh completions when running
mfc.sh in bash if the user's login shell ($SHELL) was zsh. Now we only
fall back to $SHELL when BASH_VERSION is also unset.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
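The corrected precedence can be captured as a small pure function (hypothetical sketch, with the environment passed in explicitly so each case is testable): the running shell's version variable wins, and `$SHELL` is consulted only when both are unset.

```shell
# detect_shell ZSH_VERSION BASH_VERSION SHELL -> prints "zsh" or "bash"
detect_shell() {
    if [ -n "$1" ]; then
        echo zsh                       # running under zsh
    elif [ -n "$2" ]; then
        echo bash                      # running under bash, even if $SHELL is zsh
    elif [ "${3##*/}" = "zsh" ]; then
        echo zsh                       # fallback: login shell
    else
        echo bash
    fi
}
```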
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @.github/workflows/test.yml:
- Around line 132-136: The workflow uses unquoted command substitution $(nproc)
which triggers shellcheck SC2046/actionlint; update the two command lines that
invoke "/bin/bash mfc.sh test -v ..." to quote the substitution as "$(nproc)" so
the parallelism arg is passed safely (keep $OPT1/$OPT2 unquoted as intended);
locate the occurrences by searching for the string "/bin/bash mfc.sh test -v"
and replace $(nproc) with "$(nproc)".

sbryngelson and others added 2 commits February 8, 2026 16:42
- Remove verbose flag from GitHub runner and coverage CI commands
  to prevent build hangs from pipe buffer exhaustion
- Pin pyrometheus to known-good commit 4983340 to fix potential
  chemistry build failures on Frontier

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add retry logic (3 attempts) to Frontier and Frontier AMD build
  scripts to handle transient Lustre I/O errors during compilation
- Replace fragile sbatch -W with monitor_slurm_job.sh in all submit
  scripts (Frontier, Frontier AMD, Phoenix) for resilient job monitoring
  with exponential backoff and sacct fallback
- Add SIGHUP trap to submit scripts to survive login node session drops

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
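The retry pattern described above, reduced to its core (illustrative; the real build scripts also run a clean step and wait between attempts, and the delay there is longer than the one-second placeholder here):

```shell
# Run a command up to 3 times, pausing briefly between failed attempts.
retry() {
    attempts=3
    for i in $(seq 1 "$attempts"); do
        if "$@"; then
            return 0
        fi
        echo "Attempt $i/$attempts failed; retrying..." >&2
        if [ "$i" -lt "$attempts" ]; then
            sleep 1   # placeholder; real scripts clean up and wait longer
        fi
    done
    return 1
}
```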
@codeant-ai
Copy link
Contributor

codeant-ai bot commented Feb 8, 2026

CodeAnt AI is running Incremental review



@codeant-ai codeant-ai bot added size:L This PR changes 100-499 lines, ignoring generated files and removed size:L This PR changes 100-499 lines, ignoring generated files labels Feb 8, 2026
Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


2 issues found across 5 files (changes from recent commits).

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them.


<file name=".github/workflows/phoenix/submit.sh">

<violation number="1" location=".github/workflows/phoenix/submit.sh:71">
P2: The job ID parsing is too permissive; any digits in an sbatch error message will be treated as a valid job ID, causing the script to monitor a nonexistent job. Parse only the "Submitted batch job <id>" line.</violation>
</file>

<file name=".github/workflows/frontier_amd/submit.sh">

<violation number="1" location=".github/workflows/frontier_amd/submit.sh:61">
P2: The job ID parsing is too broad; it can capture unrelated numbers from sbatch warnings/errors and continue with a bogus job ID. Parse only the "Submitted batch job" line to avoid monitoring the wrong job.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

)

job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
if [ -z "$job_id" ]; then
Contributor

@cubic-dev-ai cubic-dev-ai bot Feb 8, 2026


P2: The job ID parsing is too permissive; any digits in an sbatch error message will be treated as a valid job ID, causing the script to monitor a nonexistent job. Parse only the "Submitted batch job &lt;id&gt;" line.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .github/workflows/phoenix/submit.sh, line 71:

<comment>The job ID parsing is too permissive; any digits in an sbatch error message will be treated as a valid job ID, causing the script to monitor a nonexistent job. Parse only the "Submitted batch job <id>" line.</comment>

<file context>
@@ -61,4 +64,18 @@ job_interface="$3"
+)
+
+job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
+if [ -z "$job_id" ]; then
+    echo "ERROR: Failed to submit job. sbatch output:"
+    echo "$submit_output"
</file context>

EOT
)

job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
Contributor

@cubic-dev-ai cubic-dev-ai bot Feb 8, 2026


P2: The job ID parsing is too broad; it can capture unrelated numbers from sbatch warnings/errors and continue with a bogus job ID. Parse only the "Submitted batch job" line to avoid monitoring the wrong job.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .github/workflows/frontier_amd/submit.sh, line 61:

<comment>The job ID parsing is too broad; it can capture unrelated numbers from sbatch warnings/errors and continue with a bogus job ID. Parse only the "Submitted batch job" line to avoid monitoring the wrong job.</comment>

<file context>
@@ -53,4 +56,17 @@ job_interface="$3"
 EOT
+)
+
+job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
+if [ -z "$job_id" ]; then
+    echo "ERROR: Failed to submit job. sbatch output:"
</file context>
Suggested change
job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
job_id=$(echo "$submit_output" | awk '/Submitted batch job/ {print $4}')
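The difference between the two extraction styles discussed above can be demonstrated directly. This is a hedged sketch with simulated sbatch output (the "queue limit" warning line is invented for illustration): bare digit extraction picks up the unrelated number, while targeting the "Submitted batch job" line yields exactly one token.

```shell
#!/usr/bin/env bash
# Simulated sbatch output containing a numeric warning before the real line:
submit_output="sbatch: queue limit is 200 jobs
Submitted batch job 123456"

# Bare grep also captures the unrelated "200" (one match per line):
loose_ids=$(echo "$submit_output" | grep -oE '[0-9]+')

# Targeted awk extracts only the actual job ID:
job_id=$(echo "$submit_output" | awk '/Submitted batch job/ {print $4}')

echo "$job_id"   # 123456
```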

EOT
)

job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
Contributor


Suggestion: The job ID extraction uses grep -oE '[0-9]+' directly on the sbatch output, which returns every numeric substring on separate lines; if the output ever contains more than one number, job_id will contain multiple IDs separated by newlines, causing the monitoring script to receive an invalid job identifier and fail to track the job correctly. Limiting the extraction to the first numeric match avoids this and ensures job_id is a single valid token. [logic error]

Severity Level: Major ⚠️
- ❌ Array jobs monitored with malformed, multi-token SLURM job IDs.
- ⚠️ Frontier submit wrapper unreliable for scripts using --array.
- ⚠️ Downstream monitoring scripts may misreport job status.
Suggested change
job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
job_id=$(echo "$submit_output" | grep -oE '[0-9]+' | head -n1)
Steps of Reproduction ✅
1. In the repo root, create a SLURM batch script `array_job.sh` that defines an array job,
e.g.:

   - `#SBATCH --array=0-3`

   This script will later be passed as `$1` to `.github/workflows/frontier/submit.sh`.

2. On a SLURM system, run the wrapper:

   - `.github/workflows/frontier/submit.sh array_job.sh cpu`

   This executes the logic in `.github/workflows/frontier/submit.sh` including the
   `sbatch` submission block and captures `sbatch` output into `submit_output`.

3. SLURM prints a message like `Submitted batch job 123456_0` (typical for array jobs) to
stdout. The script at `.github/workflows/frontier/submit.sh:31-58` captures this into
`submit_output`, then the block at lines 61-66 executes:

   - `job_id=$(echo "$submit_output" | grep -oE '[0-9]+')`

   `grep -oE '[0-9]+'` returns two lines: `123456` and `0`, so `job_id` contains a
   newline-separated list of two numeric tokens.

4. The monitoring code at `.github/workflows/frontier/submit.sh:71-72` runs:

   - `bash "$SCRIPT_DIR/../../scripts/monitor_slurm_job.sh" "$job_id" "$output_file"`

   Because `"$job_id"` contains a newline, `monitor_slurm_job.sh` receives an invalid job
   identifier (either as a value with embedded newline or as multiple numeric tokens,
   depending on its parsing). This causes incorrect monitoring behavior for the submitted
   job array (e.g., failing SLURM commands like `squeue -j` or tracking the wrong job),
   demonstrating that extracting all numeric substrings from `submit_output` produces an
   invalid job ID when multiple numbers are present.
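The array-job failure mode walked through above can be reproduced without a cluster. Assuming sbatch emits an array-style ID such as "123456_0" (as the reproduction steps posit), digit-only extraction splits the ID into two tokens and `head -n1` silently drops the task suffix, while field extraction keeps the ID intact:

```shell
#!/usr/bin/env bash
submit_output="Submitted batch job 123456_0"

# grep splits on the underscore, and head -n1 keeps only the base ID:
grep_first=$(echo "$submit_output" | grep -oE '[0-9]+' | head -n1)

# awk returns the whole fourth field, suffix included:
awk_id=$(echo "$submit_output" | awk '/Submitted batch job/ {print $4}')

echo "$grep_first"   # 123456  (the _0 task suffix is lost)
echo "$awk_id"       # 123456_0
```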
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .github/workflows/frontier/submit.sh
**Line:** 61:61
**Comment:**
	*Logic Error: The job ID extraction uses `grep -oE '[0-9]+'` directly on the `sbatch` output, which returns every numeric substring on separate lines; if the output ever contains more than one number, `job_id` will contain multiple IDs separated by newlines, causing the monitoring script to receive an invalid job identifier and fail to track the job correctly. Limiting the extraction to the first numeric match avoids this and ensures `job_id` is a single valid token.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

EOT
)

job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
Contributor


Suggestion: The job ID extraction uses a bare grep -oE '[0-9]+' on the entire sbatch output, so any digits from additional messages (e.g., debug logs or some error texts that still return exit 0) can be misinterpreted as the job ID, causing the monitor to track the wrong job or think submission succeeded when it did not. Restrict the parsing to the standard "Submitted batch job NNNNN" line and extract the numeric field from that line only. [logic error]

Severity Level: Major ⚠️
- ❌ monitor_slurm_job.sh may track wrong SLURM job.
- ⚠️ Frontier CI runs may be misreported as successful.
Suggested change
job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
job_id=$(echo "$submit_output" | grep -E 'Submitted batch job[[:space:]]+[0-9]+' | head -n1 | awk '{print $4}')
Steps of Reproduction ✅
1. On a Frontier login node, create a simple SLURM script `run.sh` in the repo root that
just echoes a message and exits 0.

2. Run `.github/workflows/frontier_amd/submit.sh run.sh cpu` so that
`.github/workflows/frontier_amd/submit.sh:1-72` executes and calls `sbatch <<EOT` to
submit the job.

3. On clusters where `sbatch` prepends informational or warning lines containing digits
(e.g., maintenance date or policy message) before the standard `Submitted batch job NNNNN`
line, those lines become part of `submit_output` at
`.github/workflows/frontier_amd/submit.sh:34-59`.

4. At `.github/workflows/frontier_amd/submit.sh:61`, `job_id=$(echo "$submit_output" |
grep -oE '[0-9]+')` captures *all* numeric tokens (one per line), so `job_id` becomes a
multi-line string like `2025\n02\n10\n12345`; when passed to
`.github/scripts/monitor_slurm_job.sh` at lines 71-72 as the first argument, the
monitoring script is given an invalid or wrong job ID and will monitor the wrong job or
fail to monitor at all.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .github/workflows/frontier_amd/submit.sh
**Line:** 61:61
**Comment:**
	*Logic Error: The job ID extraction uses a bare `grep -oE '[0-9]+'` on the entire sbatch output, so any digits from additional messages (e.g., debug logs or some error texts that still return exit 0) can be misinterpreted as the job ID, causing the monitor to track the wrong job or think submission succeeded when it did not. Restrict the parsing to the standard "Submitted batch job NNNNN" line and extract the numeric field from that line only.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

EOT No newline at end of file
EOT
)

Contributor


Suggestion: The job ID is parsed by grepping for any number in the sbatch output, so if sbatch prints additional numeric tokens (for example in warnings or extra messages), the script may capture multiple numbers or the wrong one, causing the monitoring script to track an invalid job or fail; instead, parse the specific "Submitted batch job" line and extract only the job ID field. [logic error]

Severity Level: Major ⚠️
- ❌ Failed sbatch treated as successful job submission.
- ⚠️ monitor_slurm_job.sh tracks nonexistent or wrong SLURM job.
- ⚠️ Users misled about status of cluster submissions.
Suggested change
job_id=$(echo "$submit_output" | awk '/Submitted batch job/ {print $4; exit}')
Steps of Reproduction ✅
1. On a SLURM cluster, in the repo root, create a job script `bad_job.sh` whose SBATCH
options cause `sbatch` to reject it but emit an error containing digits (for example an
invalid time limit), ensuring `.github/workflows/phoenix/submit.sh` can read it at lines
12–15.

2. Run the submission helper: `.github/workflows/phoenix/submit.sh bad_job.sh cpu none`;
this calls `sbatch` via the here‑document started at line 40 and ended at line 66,
capturing all `sbatch` stdout/stderr into `submit_output`.

3. Because `sbatch` rejected the job, its output does not contain the normal `Submitted
batch job <id>` line, but does contain numeric tokens in the error message; at line 69,
`grep -oE '[0-9]+'` extracts all digit sequences from that error so `job_id` becomes a
concatenation of those numbers, and the non‑empty check at lines 69–73 passes as if
submission succeeded.

4. The script then prints `Submitted batch job $job_id` (line 76) and invokes
`scripts/monitor_slurm_job.sh` with this bogus job ID at lines 79–80, causing the
monitoring script to watch a non‑existent or wrong job and never report the actual
submission failure.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .github/workflows/phoenix/submit.sh
**Line:** 69:69
**Comment:**
	*Logic Error: The job ID is parsed by grepping for any number in the sbatch output, so if sbatch prints additional numeric tokens (for example in warnings or extra messages), the script may capture multiple numbers or the wrong one, causing the monitoring script to track an invalid job or fail; instead, parse the specific "Submitted batch job" line and extract only the job ID field.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

@codeant-ai
Contributor

codeant-ai bot commented Feb 8, 2026

CodeAnt AI Incremental review completed.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In @.github/workflows/frontier_amd/build.sh:
- Around line 24-26: The variable dirname is assigned but never used inside the
loop (see the for loop over dir and the assignment dirname=$(basename "$dir")),
so remove the unused assignment line to avoid confusion; if the basename was
intended to be used later, either replace occurrences to use dirname or actually
use dirname in relevant commands (e.g., in the call to ./mfc.sh), otherwise
delete the dirname=$(basename "$dir") line.

In @.github/workflows/frontier/build.sh:
- Around line 24-26: The script assigns dirname=$(basename "$dir") inside the
for loop but never uses it; remove the unused assignment (the dirname variable)
from the loop in the build.sh so the loop simply iterates over benchmarks/*/ and
runs ./mfc.sh without creating dirname, or alternatively use dirname in a
log/message if you intended to reference the directory name; update the loop
around the for dir in benchmarks/*/; do and the ./mfc.sh run invocation
accordingly.

In @.github/workflows/phoenix/submit.sh:
- Around line 70-75: The current extraction uses job_id=$(echo "$submit_output"
| grep -oE '[0-9]+') which can capture unrelated digits; change the extraction
to target the "Submitted batch job" line and pull the job number from that line
(e.g., filter submit_output for the line containing "Submitted batch job" then
extract the numeric field) so job_id only contains the actual SLURM ID used by
monitor_slurm_job.sh; update the code that sets job_id and the subsequent
existence check to rely on this targeted extraction (references: variable
job_id, input submit_output, and the downstream monitor_slurm_job.sh
invocation).
🧹 Nitpick comments (1)
.github/workflows/frontier/submit.sh (1)

1-72: Consider extracting the shared submit-and-monitor logic into a common helper.

All three submit scripts (phoenix, frontier, frontier_amd) now share nearly identical post-submission code: output capture → job_id parsing → error handling → monitor invocation. A shared helper (e.g., .github/scripts/submit_and_monitor.sh) accepting the sbatch output and output_file would reduce the maintenance surface and the chance of future divergence.

Not urgent — the current duplication is manageable for three files.
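The shared helper CodeRabbit proposes might look like the following. This is purely a hypothetical sketch — `.github/scripts/submit_and_monitor.sh` does not exist in the repo, and the function name and signature are invented here; it only bundles the post-submission logic the three scripts currently duplicate (targeted ID parsing, error handling, then hand-off to the monitor).

```shell
#!/usr/bin/env bash
# Hypothetical shared helper sketched from the review suggestion above.
# Usage: submit_and_monitor "<sbatch output>" "<output file>"
submit_and_monitor() {
    local submit_output="$1" output_file="$2"
    local job_id

    # Parse only the "Submitted batch job <id>" line, per the review feedback.
    job_id=$(echo "$submit_output" | awk '/Submitted batch job/ {print $4}')
    if [ -z "$job_id" ]; then
        echo "ERROR: Failed to submit job. sbatch output:"
        echo "$submit_output"
        return 1
    fi

    echo "Submitted batch job $job_id"
    # The real helper would now hand off to the monitoring script, e.g.:
    # bash "$SCRIPT_DIR/../../scripts/monitor_slurm_job.sh" "$job_id" "$output_file"
}
```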

Comment on lines +70 to +75
job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
if [ -z "$job_id" ]; then
echo "ERROR: Failed to submit job. sbatch output:"
echo "$submit_output"
exit 1
fi
Contributor


⚠️ Potential issue | 🟡 Minor

grep -oE '[0-9]+' may capture extraneous numbers from sbatch output.

If sbatch emits any warnings or informational messages containing digits before the "Submitted batch job" line, job_id will contain multiple numbers (one per line). This would silently corrupt the argument passed to monitor_slurm_job.sh.

A more targeted extraction avoids this:

Proposed fix
-job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
+job_id=$(echo "$submit_output" | grep -oP 'Submitted batch job \K[0-9]+')

If grep -P isn't available on the target system, an alternative:

-job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
+job_id=$(echo "$submit_output" | awk '/Submitted batch job/ {print $NF}')
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
job_id=$(echo "$submit_output" | grep -oE '[0-9]+')
if [ -z "$job_id" ]; then
echo "ERROR: Failed to submit job. sbatch output:"
echo "$submit_output"
exit 1
fi
job_id=$(echo "$submit_output" | grep -oP 'Submitted batch job \K[0-9]+')
if [ -z "$job_id" ]; then
echo "ERROR: Failed to submit job. sbatch output:"
echo "$submit_output"
exit 1
fi
🤖 Prompt for AI Agents
In @.github/workflows/phoenix/submit.sh around lines 70 - 75, The current
extraction uses job_id=$(echo "$submit_output" | grep -oE '[0-9]+') which can
capture unrelated digits; change the extraction to target the "Submitted batch
job" line and pull the job number from that line (e.g., filter submit_output for
the line containing "Submitted batch job" then extract the numeric field) so
job_id only contains the actual SLURM ID used by monitor_slurm_job.sh; update
the code that sets job_id and the subsequent existence check to rely on this
targeted extraction (references: variable job_id, input submit_output, and the
downstream monitor_slurm_job.sh invocation).

sbryngelson and others added 7 commits February 8, 2026 18:40
The build step runs directly on the login node (not through sbatch),
so it also needs trap '' HUP to survive session drops.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The verbose flag was removed based on a pipe buffer theory that
turned out to be wrong — the real issue was SIGHUP from login
node session drops, which is now fixed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Verbose output stalls GitHub-hosted runners (pipe buffer issue)
but is safe on Frontier/Phoenix where output goes to SLURM files.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The streaming build mode (-v) was reading stdout and stderr
sequentially, causing a classic subprocess deadlock when the
compiler's stderr filled the 64KB pipe buffer. Fix by merging
stderr into stdout with subprocess.STDOUT.

Now safe to use -v on all CI runners (GitHub-hosted and self-hosted).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Previously HDF5/Silo/post_process were only built during the Test
step, adding build time to test execution. Now the Build step
includes --test-all for MPI jobs so all targets are pre-built.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Replace lbl_comp with cluster field that maps to workflow directory
  names (phoenix, frontier, frontier_amd), making steps generic
- Collapse 6 conditional Build/Test steps into 2 generic ones
- Merge duplicate Archive Logs blocks into one
- Rename OPT1/OPT2 to TEST_ALL/TEST_PCT for clarity

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The pinned commit SHA breaks uv's shallow git fetch on macOS CI.
Track HEAD instead since there's no tag at that commit.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them.


<file name="toolchain/pyproject.toml">

<violation number="1">
P2: The git dependency is no longer pinned to a specific commit, which makes builds non-reproducible and risks pulling unreviewed upstream changes. Keep a fixed commit or tag to ensure deterministic installs.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

sbryngelson and others added 3 commits February 8, 2026 21:56
Stale build artifacts from a failed attempt cause instant re-failure.
Run ./mfc.sh clean before retrying to ensure a fresh build.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@codeant-ai
Contributor

codeant-ai bot commented Feb 9, 2026

CodeAnt AI is running Incremental review



@codeant-ai codeant-ai bot added size:XL This PR changes 500-999 lines, ignoring generated files and removed size:L This PR changes 100-499 lines, ignoring generated files labels Feb 9, 2026
@codeant-ai
Contributor

codeant-ai bot commented Feb 9, 2026

CodeAnt AI Incremental review completed.

@sbryngelson sbryngelson merged commit 0945009 into MFlowCode:master Feb 9, 2026
30 of 32 checks passed

Labels

Review effort 4/5 size:XL This PR changes 500-999 lines, ignoring generated files
